- Governments across the world are at different stages of creating and enacting AI regulations.
- The EU may be the first to enact generative AI regulation. China takes a more restrictive stance.
- The US has relied on industry experts, while the EU and Brazil aim to establish risk-based category systems.
Generative AI exploded into the public consciousness in late 2022. A year from now, it could be one of the most heavily regulated technologies in the tech industry.
Across the globe, from the US to the EU and Brazil, lawmakers are urgently considering, and in China’s case enacting, rules for managing AI and reining in some of its more alarming use cases.
The EU is likely to be the first region to enact some form of oversight or regulation around generative AI. The bloc, which comprises 27 member countries, is in late-stage negotiations over the AI Act, which the European Commission has dubbed “the world’s first rules on AI.” A final version of the AI Act is expected to be agreed on this year, meaning it would go into effect in late 2025.
Some AI tools could be banned in Europe
The Act was first proposed in 2021, before OpenAI released its generative AI tools ChatGPT and DALL-E, releases that led the likes of Meta, Google, and Microsoft to become public players in, and leading proponents of, generative AI.
The EU updated its draft regulation this year. Currently, the Act’s biggest push is to categorize AI models and tools as “high” risk or simply “unacceptable.” The high-risk category covers AI used in areas like biometric identification, education, worker management, the legal system, and law enforcement; any tool in this category will need to be “assessed” and approved before release.
AI tools and uses deemed "unacceptable" will be banned in the EU under the Act. That includes "remote biometric identification systems," or facial recognition technology; "social scoring," or categorizing people based on economic class and personal characteristics; and "cognitive behavioral manipulation," like voice-activated AI-powered toys.
As for generative AI, under the EU's proposed rules, disclosure of artificially generated content will be mandatory, as will disclosure of the data used to train any large language model. That second requirement is significant: amid increased scrutiny and legal action from authors and other creators whose work has been scraped from the internet into massive training datasets, companies behind AI tools and LLMs have stopped specifying where their training data comes from. Companies will also need to show they've worked to mitigate legal risks before releasing new tools and models, and to register all foundation models in a database maintained by the European Commission.
The US approach
The US is behind the EU when it comes to regulating AI. In September, the White House said it is "developing an executive order" on the technology and will pursue "bipartisan regulation." While the White House has been actively seeking advice from industry experts, the Senate has convened one hearing and one closed-door "AI Forum" with leaders from major tech companies. Neither event resulted in much action, even after Mark Zuckerberg was confronted during the forum with the fact that Meta's Llama 2 model can produce a detailed guide to making anthrax. Still, American lawmakers say they are committed to some form of AI regulation.
"Make no mistake, there will be regulation," Senator Richard Blumenthal said during the hearing.
US copyright law could change as well. The Copyright Office said in August that it is considering action or new federal rules around generative AI, citing "widespread public debate about what these systems may mean for the future of creative industries." It opened a public comment period running through early November and has already received more than 7,500 submissions.
What's happening in the UK
The UK, meanwhile, wants to become an "AI superpower," according to a March paper from its Department for Science, Innovation and Technology. Although the department has created a "regulatory sandbox for AI," the UK has no immediate plans to introduce new legislation overseeing the technology, intending instead to assess AI as it evolves.
"By rushing to legislate too early, we would risk placing undue burdens on businesses," Michelle Donelan, the Secretary of State for Science, Innovation, and Technology, said. "As the technology evolves, our regulatory approach may also need to adjust."
Brazil and China
Brazil is taking an approach similar to the EU's: a draft legislation update earlier this year categorizes AI tools and uses as "high" or "excessive" risk and would ban those in the latter category. Tech advisory firm Access Partnership described the proposed law as having a "robust human rights" focus while outlining a "strict liability regime," under which creators of an LLM would be held liable for harm caused by any AI system deemed "high-risk."
China, however, is where some of the only new AI regulations have actually been enacted. Despite the country's widespread use of tools like facial recognition for government surveillance, China has, over the past two years, enacted rules first on recommendation algorithms, a core use case for AI, and then on "deep synthesis" tech, better known as deepfakes. Now it's looking to regulate generative AI. One of the most notable requirements under a draft law is that any LLM, and its training data, must be "true and accurate."
That one requirement alone could be enough to keep consumer-level generative AI out of China almost entirely. As a report on China's rules from the Carnegie Endowment for International Peace, a nonpartisan think tank, put it, the requirement is "a potentially insurmountable hurdle for AI chatbots to clear."